MOSS: End-to-End Dialog System Framework with Modular Supervision
A major bottleneck in training end-to-end task-oriented dialog systems is the
lack of data. To utilize limited training data more efficiently, we propose the
Modular Supervision Network (MOSS), an encoder-decoder training framework that
can incorporate supervision from various intermediate dialog system modules,
including natural language understanding, dialog state tracking, dialog policy
learning, and natural language generation. With only 60% of the training data,
MOSS-all (i.e., MOSS with supervision from all four dialog modules) outperforms
state-of-the-art models on CamRest676. Moreover, introducing modular
supervision has even bigger benefits when the dialog task has a more complex
dialog state and action space. With only 40% of the training data, MOSS-all
outperforms the state-of-the-art model on LaptopNetwork, a complex laptop
network troubleshooting dataset that we introduce, consisting of conversations
in Chinese between real customers and customer service agents. Moreover, the
MOSS framework can accommodate dialogs that have supervision from different
dialog modules at both the framework level and the model level. MOSS is
therefore highly flexible to update in real-world deployments.
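The abstract describes modular supervision only at a high level, so the
following is a minimal sketch of how it might be realized as a multi-task loss
over the four named modules; the dict-based interface, module keys, and
per-module weights are assumptions for illustration, not the paper's actual
implementation.

```python
import torch

def modular_supervision_loss(outputs, labels, weights=None):
    """Combine per-module losses (NLU, DST, policy, NLG) into one objective.

    outputs / labels: dicts mapping a module name to its logits / targets
    (hypothetical interface). A dialog annotated for only some modules
    contributes only those terms, mirroring MOSS's claimed ability to
    accommodate partial module-level supervision.
    """
    modules = ["nlu", "dst", "policy", "nlg"]
    weights = weights or {m: 1.0 for m in modules}  # assumed equal weighting
    ce = torch.nn.CrossEntropyLoss()
    total = 0.0
    for m in modules:
        if m in labels:  # skip modules this dialog has no labels for
            total = total + weights[m] * ce(outputs[m], labels[m])
    return total
```

Under this reading, a dialog annotated only with dialog states would contribute
just the DST term, which is one plausible way supervision could vary per
example without changing the training loop.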
OpenDataVal: a Unified Benchmark for Data Valuation
Assessing the quality and impact of individual data points is critical for
improving model performance and mitigating undesirable biases within the
training dataset. Several data valuation algorithms have been proposed to
quantify data quality; however, a systematic and standardized benchmarking
system for data valuation has been lacking. In this paper, we introduce
OpenDataVal, an easy-to-use and unified benchmark framework that empowers
researchers and practitioners to apply and compare various data valuation
algorithms. OpenDataVal provides an integrated environment that includes (i) a
diverse collection of image, natural language, and tabular datasets, (ii)
implementations of eleven different state-of-the-art data valuation algorithms,
and (iii) a prediction model API that can import any model from scikit-learn.
Furthermore, we propose four downstream machine learning tasks for evaluating
the quality of data values. We perform benchmarking analysis using OpenDataVal,
quantifying and comparing the efficacy of state-of-the-art data valuation
approaches. We find that no single algorithm performs uniformly best across all
tasks, and that an appropriate algorithm should be chosen for a user's downstream
task. OpenDataVal is publicly available at https://opendataval.github.io with
comprehensive documentation. Furthermore, we provide a leaderboard where
researchers can evaluate the effectiveness of their own data valuation
algorithms.
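The abstract does not show OpenDataVal's own API, so rather than guess at it,
here is a sketch of leave-one-out valuation, a classic algorithm of the kind
such benchmarks implement, using only standard scikit-learn calls; the toy
dataset and helper name are illustrative, not part of the library.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy data standing in for one of the benchmark's tabular datasets.
X, y = make_classification(n_samples=200, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def leave_one_out_values(X_tr, y_tr, X_val, y_val):
    """Value of point i = drop in validation accuracy when i is removed."""
    base = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_val, y_val)
    values = np.empty(len(X_tr))
    for i in range(len(X_tr)):
        mask = np.arange(len(X_tr)) != i  # train on all points except i
        acc = (LogisticRegression(max_iter=1000)
               .fit(X_tr[mask], y_tr[mask])
               .score(X_val, y_val))
        values[i] = base - acc
    return values

values = leave_one_out_values(X_tr, y_tr, X_val, y_val)
print("lowest-value points:", np.argsort(values)[:5])
```

Leave-one-out requires one model fit per training point, and more sophisticated
valuation methods are costlier still, which is part of why a shared harness for
comparing them across datasets and downstream tasks is useful.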
GPT detectors are biased against non-native English writers
The rapid adoption of generative language models has brought about
substantial advancements in digital communication, while simultaneously raising
concerns regarding the potential misuse of AI-generated content. Although
numerous detection methods have been proposed to differentiate between AI and
human-generated content, the fairness and robustness of these detectors remain
underexplored. In this study, we evaluate the performance of several
widely-used GPT detectors using writing samples from native and non-native
English writers. Our findings reveal that these detectors consistently
misclassify non-native English writing samples as AI-generated, whereas native
writing samples are accurately identified. Furthermore, we demonstrate that
simple prompting strategies can not only mitigate this bias but also
effectively bypass GPT detectors, suggesting that GPT detectors may
unintentionally penalize writers with constrained linguistic expressions. Our
results call for a broader conversation about the ethical implications of
deploying ChatGPT content detectors and caution against their use in evaluative
or educational settings, particularly when they may inadvertently penalize or
exclude non-native English speakers from the global discourse.
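The abstract does not say how the evaluated detectors work internally, but one
common detection mechanism is perplexity thresholding, which would be
consistent with the reported bias: constrained, more predictable phrasing
yields lower perplexity and so gets flagged as AI-generated. A minimal sketch
of that mechanism, assuming a GPT-2 scorer and a purely hypothetical threshold:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

# A threshold-style detector flags predictable text as AI-written. The
# cutoff of 60.0 is invented for illustration; real detectors tune this.
def looks_ai_generated(text: str, threshold: float = 60.0) -> bool:
    return perplexity(text) < threshold
```

Under this kind of scheme, a non-native writer's simpler vocabulary and more
formulaic sentence structure lower perplexity for reasons unrelated to
authorship, which is one plausible route to the misclassifications the study
reports.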